Reliability of the Peer-Review Process for Adverse Event Rating

Authors

  • Alan J. Forster
  • Monica Taljaard
  • Carol Bennett
  • Carl van Walraven
Abstract

BACKGROUND: Adverse events are poor patient outcomes caused by medical care. Identifying them requires peer review of poor outcomes, which may be unreliable. Combining physician ratings might improve the accuracy of adverse event classification.

OBJECTIVE: To evaluate the variation in peer-reviewer ratings of adverse outcomes; to determine the impact of this variation on estimates of reviewer accuracy; and to determine how many reviewers must judge that an adverse event occurred for the true probability of an adverse event to exceed 50%, 75% or 95%.

METHODS: Thirty physicians rated 319 case reports giving details of poor patient outcomes following hospital discharge. They rated whether medical management caused the outcome using a six-point ordinal scale. We conducted latent class analyses to estimate the prevalence of adverse events as well as the sensitivity and specificity of each reviewer. We used this model and Bayesian calculations to determine the probability that an adverse event truly occurred for each patient as a function of the number of positive ratings.

RESULTS: The overall median score on the six-point ordinal scale was 3 (IQR 2, 4), but individual raters' median scores ranged from 1 (in four reviewers) to 5. Overall, 39.7% of cases (3798/9570) were rated as an adverse event. The median kappa for all pair-wise combinations of the 30 reviewers was 0.26 (IQR 0.16, 0.42; min = -0.07, max = 0.62). Reviewer sensitivity and specificity for adverse event classification ranged from 0.06 to 0.93 and from 0.50 to 0.98, respectively. The estimated prevalence of adverse events using a latent class model with a common sensitivity and specificity for all reviewers (0.64 and 0.83, respectively) was 47.6%. For a patient to have a 95% chance of truly having an adverse event, at least 3 of 3 reviewers must deem the outcome an adverse event.

CONCLUSION: Adverse event classification is unreliable. To be certain that a case truly represents an adverse event, multiple reviewers must agree.
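The "at least 3 of 3 reviewers" threshold follows from a standard Bayes calculation under the usual latent class assumption that reviewers rate independently given the true adverse event status. A minimal sketch of that calculation, using the pooled estimates reported in the abstract (prevalence 0.476, sensitivity 0.64, specificity 0.83); the function name and structure are illustrative, not from the paper:

```python
# Posterior probability that an adverse event (AE) truly occurred, given
# k positive ratings from n reviewers. Assumes reviewers are conditionally
# independent given the true AE status (the standard latent class assumption)
# and uses the pooled estimates reported in the abstract.

PREV = 0.476  # estimated prevalence of adverse events
SENS = 0.64   # common reviewer sensitivity
SPEC = 0.83   # common reviewer specificity


def posterior_ae(k: int, n: int,
                 prev: float = PREV, sens: float = SENS,
                 spec: float = SPEC) -> float:
    """P(AE | k of n reviewers rate the case as an AE), by Bayes' rule.

    The binomial coefficient is the same in both likelihoods and cancels
    in the ratio, so it is omitted.
    """
    # Likelihood of k positives if the case truly is an AE ...
    like_ae = sens**k * (1 - sens)**(n - k)
    # ... and if it is not (every positive is then a false positive).
    like_no_ae = (1 - spec)**k * spec**(n - k)
    return prev * like_ae / (prev * like_ae + (1 - prev) * like_no_ae)


if __name__ == "__main__":
    for n in (1, 2, 3):
        p = posterior_ae(n, n)  # all n reviewers call the case an AE
        print(f"{n} of {n} positive ratings -> P(AE) = {p:.3f}")
```

With these estimates, unanimous ratings from one, two, and three reviewers yield posteriors of about 0.77, 0.93 and 0.98, so three unanimous reviewers is the first point at which the posterior clears 0.95, consistent with the abstract's conclusion.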


Related articles

A Systematic Review of the Key Success Factors of Sports Event Management: A Resource-based View Approach

Background. Many countries worldwide use sports events as a tool to stimulate both their national and local economies. To gain a competitive advantage, knowledge of sports event success is essential for stakeholders and hosting countries. However, due to the diverse conceptualizations of event success, the knowledge of the issue is fragmented, and there is a lack of comprehensive studies for sc...


The Viewpoints of Alborz University of Medical Sciences’ Faculty Members on Open Peer Review of Journal Articles

Background and Aim: The open peer review process, which is one of the peer-reviewed methods in journals, has been accepted in scientific forums. The aim of this study was to investigate the points of view of university faculty members about the open peer review process of journal articles. Materials and Methods: The study used a descriptive survey. The sample size was calculated using the Coch...


Peer Reviewers’ Comments on Research Articles Submitted by Iranian Researchers

The invisible hands of peer reviewers play a determining role in the eventual fate of submissions to international English-medium journals. This study builds on the assumption that non-native researchers and prospective academic authors may find the whole striving for publication, and more specifically, the tough review process, less threatening if they are aware of journal reviewers’ expectation...


Grant Peer Review: Improving Inter-Rater Reliability with Training

This study developed and evaluated a brief training program for grant reviewers that aimed to increase inter-rater reliability, rating scale knowledge, and effort to read the grant review criteria. Enhancing reviewer training may improve the reliability and accuracy of research grant proposal scoring and funding recommendations. Seventy-five Public Health professors from U.S. research universit...


Self-Others Rating Discrepancy of Task and Contextual Performance

This research compared ratings of task performance and contextual performance from three different sources: self, peer, and supervisor. Participants were service-industry employees in Yogyakarta, Indonesia. A sample of 146 employees and 40 supervisors from the service industries provided ratings of task performance and contextual performance. The results indicated th...



Journal title:

Volume 7, Issue

Pages -

Publication date: 2012